
    Measuring Cerebral Activation From fNIRS Signals: An Approach Based on Compressive Sensing and Taylor-Fourier Model

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive and portable neuroimaging technique that uses NIR light to monitor cerebral activity through the so-called haemodynamic responses (HRs). The measurement is challenging because of the presence of severe physiological noise, such as respiratory and vasomotor waves. In this paper, a novel technique for fNIRS signal denoising and HR estimation is described. The method relies on the joint application of compressed-sensing principles and Taylor-Fourier modeling of nonstationary spectral components. It operates in the frequency domain and models physiological noise as a linear combination of sinusoidal tones, each characterized in terms of frequency, amplitude, and initial phase. Algorithm performance is assessed on both synthetic and experimental data sets and compared with that of two reference techniques from the fNIRS literature.
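The core idea of the method, modeling physiological noise as a linear combination of sinusoidal tones characterized by frequency, amplitude, and initial phase, can be illustrated with a minimal sketch. This is not the paper's compressed-sensing/Taylor-Fourier estimator: here the tone frequencies are assumed known, and amplitude and phase are recovered by ordinary least squares.

```python
import numpy as np

def remove_sinusoidal_tones(y, fs, freqs):
    """Fit and subtract sinusoidal tones at the given frequencies.

    Each tone's amplitude and initial phase are estimated jointly by linear
    least squares via a cosine/sine pair per frequency (an illustrative
    helper, not the paper's Taylor-Fourier estimator).
    """
    t = np.arange(len(y)) / fs
    cols = []
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef

# Synthetic example: a slow haemodynamic-like response plus a
# respiratory-band tone at 0.3 Hz (all values invented)
fs = 10.0
t = np.arange(0, 60, 1 / fs)
hr = np.exp(-((t - 30) ** 2) / 50)              # toy haemodynamic response
noise = 0.8 * np.cos(2 * np.pi * 0.3 * t + 1.0)  # physiological tone
cleaned = remove_sinusoidal_tones(hr + noise, fs, [0.3])
```

Because the toy HR is much slower than the 0.3 Hz tone, subtracting the fitted tone leaves the response essentially intact.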

    Algorithm and software to automatically identify latency and amplitude features of local field potentials recorded in electrophysiological investigation

    A function, called by main_script.m, that computes the onset and maximum latencies and amplitudes from the signal time-derivative, together with the helper functions required for main_script.m to run correctly. To test the algorithm, it is sufficient to invoke main_script.m (all the other functions must be contained in the same folder).
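For readers without MATLAB, the described computation (onset from the signal time-derivative, peak from the maximum amplitude) can be sketched in Python. The 10% derivative-threshold rule used here for the onset is a hypothetical stand-in for the toolbox's actual criterion:

```python
import numpy as np

def latency_features(x, fs, frac=0.1):
    """Onset and peak latency/amplitude from the signal time-derivative.

    Onset: first sample where the derivative exceeds `frac` of its maximum
    (an assumed threshold rule, not necessarily the toolbox's).
    Peak: the sample of maximum amplitude.
    """
    dx = np.gradient(x) * fs                      # time-derivative (units/s)
    onset_idx = int(np.argmax(dx > frac * dx.max()))  # first crossing
    peak_idx = int(np.argmax(x))
    return {
        "onset_latency_s": onset_idx / fs,
        "onset_amplitude": float(x[onset_idx]),
        "peak_latency_s": peak_idx / fs,
        "peak_amplitude": float(x[peak_idx]),
    }

# Toy local field potential: flat baseline, then a smooth deflection at 50 ms
fs = 1000.0
t = np.arange(0, 0.2, 1 / fs)
lfp = np.where(t > 0.05, np.sin(2 * np.pi * 5 * (t - 0.05)), 0.0)
feats = latency_features(lfp, fs)
```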

    Connectivity Analysis in EEG Data: A Tutorial Review of the State of the Art and Emerging Trends

    Understanding how different areas of the human brain communicate with each other is a crucial issue in neuroscience. The concepts of structural, functional, and effective connectivity have been widely exploited to describe the human connectome, consisting of brain networks, their structural connections, and their functional interactions. Although high-spatial-resolution imaging techniques such as functional magnetic resonance imaging (fMRI) are widely used to map this complex network of multiple interactions, electroencephalographic (EEG) recordings offer high temporal resolution and are thus well suited to describing both spatially distributed and temporally dynamic patterns of neural activation and connectivity. In this work, we provide a technical account and a categorization of the most widely used data-driven approaches to assess brain functional connectivity, intended as the study of the statistical dependencies between the recorded EEG signals. Pairwise and multivariate, as well as directed and non-directed, connectivity metrics are discussed in terms of their pros and cons in the time, frequency, and information-theoretic domains. The establishment of conceptual and mathematical relationships between metrics from these three frameworks, and the discussion of novel methodological approaches, allow the reader to delve into the problem of inferring functional connectivity in complex networks. Furthermore, emerging trends for the description of extended forms of connectivity (e.g., high-order interactions) are discussed, along with graph-theory tools that explore the topological properties of the network of connections provided by the proposed metrics. Applications to EEG data are reviewed. In addition, the importance of source localization and the impact of signal acquisition and pre-processing techniques (e.g., filtering and artifact rejection) on the connectivity estimates are recognized and discussed.
    By going through this review, the reader can follow the entire process of EEG pre-processing and analysis for the study of brain functional connectivity, and learn novel methodologies and approaches to the problem of inferring connectivity within complex networks.
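As a concrete starting point, the simplest pairwise, non-directed, time-domain metric of the family reviewed, Pearson correlation between channel pairs, takes only a few lines; the channel layout and data below are synthetic:

```python
import numpy as np

def correlation_connectivity(eeg):
    """Pairwise, non-directed functional connectivity in the time domain.

    `eeg` is a (channels x samples) array; returns the (channels x channels)
    Pearson correlation matrix.
    """
    return np.corrcoef(eeg)

# Toy data: channels 0 and 1 share a common 10 Hz source, channel 2 is noise
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)                 # 2 s at 250 Hz
source = np.sin(2 * np.pi * 10 * t)
eeg = np.vstack([
    source + 0.1 * rng.standard_normal(t.size),
    source + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),
])
C = correlation_connectivity(eeg)
```

The two channels driven by the common source show a near-unit correlation, while their correlation with the noise channel stays near zero; the directed and multivariate metrics discussed in the review go beyond what this symmetric matrix can capture.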

    Wearable continuous glucose monitoring sensors: A revolution in diabetes treatment

    Worldwide, the number of people affected by diabetes is rapidly increasing due to aging populations and sedentary lifestyles, and is expected to exceed 500 million cases by 2030, making diabetes one of the most challenging socio-health emergencies of the third millennium. Daily management of diabetes by patients relies on the capability of correctly measuring glucose concentration levels in the blood by using suitable sensors. In recent years, glucose monitoring has been revolutionized by the development of Continuous Glucose Monitoring (CGM) sensors, wearable non- or minimally-invasive devices that measure glucose concentration by exploiting different physical principles, e.g., glucose-oxidase reactions, fluorescence, or skin dielectric properties, and provide real-time measurements every 1–5 min. CGM has opened new challenges in different disciplines, e.g., medicine, physics, electronics, chemistry, ergonomics, data/signal processing, and software development, to mention but a few. This paper first provides an overview of wearable CGM sensor technologies, covering both commercial devices and research prototypes. Then, the role of CGM in the ongoing evolution of decision support systems for diabetes therapy is discussed. Finally, the paper presents possible new horizons for wearable CGM sensor applications and perspectives in terms of big data analytics for personalized and proactive medicine.

    Preprocessing by a Bayesian Single-Trial Event-Related Potential Estimation Technique Allows Feasibility of an Assistive Single-Channel P300-Based Brain-Computer Interface

    A major clinical goal of brain-computer interfaces (BCIs) is to allow severely paralyzed patients to communicate their needs and thoughts in their everyday lives. Among others, P300-based BCIs, which rely on EEG measurements, have been successfully operated by people with severe neuromuscular disabilities. Besides reducing the number of stimulus repetitions needed to detect the P300, a current challenge in P300-based BCI research is the simplification of the system's setup and maintenance by lowering the number N of recording channels. Using offline data collected from 30 subjects (21 amyotrophic lateral sclerosis patients and 9 controls) through a clinical BCI with N=5 channels, in the present paper we show that a preprocessing approach based on a Bayesian single-trial ERP estimation technique allows N to be reduced to 1 without affecting the system's accuracy. The potentially great benefit for the practical usability of BCI devices (including patient acceptance) that would follow from reducing the number N of channels encourages further development of the present study, for example in an online setting.
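The paper's exact Bayesian estimator is not reproduced here, but the general idea of single-trial ERP estimation with a prior, smoothing each noisy sweep toward a plausible regular waveform, can be sketched as a MAP estimate with a generic second-difference smoothness prior. The prior, the value of gamma, and the toy P300 wave are all illustrative assumptions:

```python
import numpy as np

def bayesian_single_trial_erp(y, gamma):
    """Single-trial ERP estimate with a smoothness prior.

    Solves x = argmin ||y - x||^2 + gamma * ||D x||^2, where D is the
    second-difference operator; gamma trades data fit against smoothness.
    (A generic Bayesian-smoothing sketch, not the paper's estimator.)
    """
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)     # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + gamma * D.T @ D, y)

# Toy single trial: a smooth P300-like wave buried in EEG-like noise
rng = np.random.default_rng(1)
t = np.arange(0, 0.8, 1 / 250)
erp = 5.0 * np.exp(-((t - 0.3) ** 2) / 0.005)  # positive wave near 300 ms
trial = erp + 2.0 * rng.standard_normal(t.size)
estimate = bayesian_single_trial_erp(trial, gamma=50.0)
```

The estimate is closer to the underlying wave than the raw sweep, which is the property that makes a downstream single-channel P300 detector feasible.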

    Regularised Model Identification Improves Accuracy of Multisensor Systems for Noninvasive Continuous Glucose Monitoring in Diabetes Management

    Continuous glucose monitoring (CGM) by suitable portable sensors plays a central role in the treatment of diabetes, a disease currently affecting more than 350 million people worldwide. Noninvasive CGM (NI-CGM), in particular, is appealing for reasons related to patient comfort (no needles are used) but remains challenging. NI-CGM prototypes exploiting multisensor approaches have recently been proposed to deal with physiological and environmental disturbances. In these prototypes, signals measured noninvasively (e.g., skin impedance, temperature, and optical skin properties) are combined through a static multivariate linear model to estimate glucose levels. In this work, by exploiting a dataset of 45 experimental sessions acquired in diabetic subjects, we show that regularisation-based techniques for the identification of the model, such as the least absolute shrinkage and selection operator (better known as LASSO), Ridge regression, and Elastic-Net regression, improve the accuracy of glucose estimates with respect to techniques previously used in the literature, such as partial least squares regression. More specifically, the Elastic-Net model (i.e., the model identified using a combination of the l1 and l2 norms) gives the best results according to the metrics widely accepted in the diabetes community. This model represents an important incremental step toward the development of NI-CGM devices effectively usable by patients.
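As an illustration of the model-identification step, a minimal Elastic-Net estimator can be written by coordinate descent; the data below are synthetic stand-ins for the multisensor channels, not the study's measurements:

```python
import numpy as np

def elastic_net(X, y, lam1, lam2, n_iter=200):
    """Elastic-Net by coordinate descent: minimises
    (1/2n)||y - Xw||^2 + lam1*||w||_1 + (lam2/2)*||w||^2,
    i.e. a combination of the l1 (sparsity) and l2 (shrinkage) penalties."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Residual with feature j's current contribution added back
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            # Soft-threshold (l1), then shrink (l2)
            w[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
    return w

# Synthetic stand-in for a multisensor dataset: 8 channels (e.g. impedance,
# temperature, optical readings), only 3 of which carry glucose information;
# all names and numbers here are illustrative
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 8))
true_coef = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8, 0.0, 0.0])
glucose = X @ true_coef + 0.1 * rng.standard_normal(200)

w = elastic_net(X, glucose, lam1=0.025, lam2=0.025)
```

The l1 term drives the weights of uninformative channels toward zero while the l2 term stabilises the estimate, which is the behaviour that gives Elastic-Net its edge over unregularised identification when channels are noisy or redundant.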

    Glycaemic variability-based classification of impaired glucose tolerance vs. type 2 diabetes using continuous glucose monitoring data

    Many glycaemic variability (GV) indices extracted from continuous glucose monitoring (CGM) system data have been proposed for the characterisation of various aspects of glucose concentration profile dynamics in both healthy and non-healthy individuals. However, the inter-index correlations have made it difficult to reach a consensus regarding the best applications, or the best subset of indices, for clinical scenarios such as distinguishing subjects according to diabetes progression stage. Recently, a logistic regression-based method was used to address the basic problem of differentiating between healthy subjects and those affected by impaired glucose tolerance (IGT) or type 2 diabetes (T2D) using a pool of 25 GV-based indices. Whereas healthy subjects were classified accurately, the distinction between patients with IGT and T2D remained critical. In the present work, using a dataset of CGM time series collected in 62 subjects, we developed a polynomial-kernel support vector machine-based approach and demonstrated the ability to distinguish between subjects affected by IGT and T2D based on a pool of 37 GV indices complemented by four basic parameters (age, sex, BMI, and waist circumference) with an accuracy of 87.1%.
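The classification step can be sketched with a polynomial-kernel support vector machine on synthetic features; the feature values, their number, and the label rule below are illustrative, not the paper's GV indices:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the feature table: 62 rows (subjects), a handful of
# columns standing in for GV indices plus basic parameters; the label mimics
# IGT (0) vs T2D (1) through an invented nonlinear boundary
rng = np.random.default_rng(3)
X = rng.standard_normal((62, 6))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2.0).astype(int)

# Polynomial kernel; coef0=1 includes the lower-order interaction terms,
# which a homogeneous (coef0=0) polynomial kernel would miss
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="poly", degree=3, coef0=1.0, C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
```

Standardising the features before the kernel is important in practice, since GV indices come in very different units and ranges.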

    Expected accuracy of proximal and distal temperature estimated by wireless sensors, in relation to their number and position on the skin

    A popular method to estimate proximal and distal temperature (TPROX and TDIST) consists of calculating a weighted average of nine wireless sensors placed at pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh areas (left and right) and the abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss or removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of the temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to the number and position of the sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours, and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor were calculated. The accuracy of the TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared with the reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error, and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, whereas the loss or removal of the abdominal sensor resulted in an error of 0.25°C MAE, the greatest impact on the quality of the reconstruction. This information may help researchers and clinicians to: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
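The weighted-average estimate, and the refitting of weights for a reduced subset of sensors, can be sketched as follows; the signals, per-site offsets, and reference weights are invented for illustration and are not the study's values:

```python
import numpy as np

# Toy 24-h recordings from the five proximal sensors (abdomen, left/right
# infra-clavicular, left/right mid-thigh), one sample per minute
rng = np.random.default_rng(4)
t = np.linspace(0, 24, 24 * 60)
base = 36.0 + 0.5 * np.sin(2 * np.pi * t / 24)        # circadian-like drift
sensors = (base
           + 0.3 * rng.standard_normal((5, t.size))    # per-sensor noise
           + np.array([0.4, 0.1, 0.1, -0.2, -0.2])[:, None])  # site offsets
ref_weights = np.array([0.3, 0.2, 0.2, 0.15, 0.15])    # hypothetical weights
t_prox_ref = ref_weights @ sensors                     # reference TPROX

def refit_subset(sensors, reference, keep):
    """Least-squares weights for a reduced sensor subset, and the MAE of the
    resulting reconstruction against the full-set reference."""
    A = sensors[keep, :].T
    w, *_ = np.linalg.lstsq(A, reference, rcond=None)
    mae = np.mean(np.abs(A @ w - reference))
    return w, mae

# All five sensors recover the reference exactly; dropping the abdominal
# sensor (index 0) forces refitted weights and leaves a residual error
w5, mae5 = refit_subset(sensors, t_prox_ref, keep=[0, 1, 2, 3, 4])
w4, mae4 = refit_subset(sensors, t_prox_ref, keep=[1, 2, 3, 4])
```

Iterating `refit_subset` over every subset of sensors is how a table of weights and expected errors like the one the study provides could be assembled.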